Interactivity#

The visualizations consume the Interpret API and are responsible for both displaying explanations and the underlying rendering infrastructure.

Visualizing with the show method#

Interpret exposes a top-level method, show, which acts as the surface for rendering explanation visualizations. It can produce either a dropdown widget or a dashboard depending on what’s provided.

Show a single explanation#

For basic use cases, it is easiest to show one explanation at a time. The rendered widget provides a dropdown to select between visualizations. For a global explanation, for example, it provides an overview along with graphs for each feature, as shown in the code below:

from interpret import set_visualize_provider
from interpret.provider import InlineProvider
set_visualize_provider(InlineProvider())
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

df = pd.read_csv(
    "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
    header=None)
df.columns = [
    "Age", "WorkClass", "fnlwgt", "Education", "EducationNum",
    "MaritalStatus", "Occupation", "Relationship", "Race", "Gender",
    "CapitalGain", "CapitalLoss", "HoursPerWeek", "NativeCountry", "Income"
]
X = df.iloc[:, :-1]
y = (df.iloc[:, -1] == " >50K").astype(int)

seed = 42
np.random.seed(seed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=seed)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
ebm_global = ebm.explain_global()
show(ebm_global)




Show a specific visualization within an explanation#

If you are after one specific visualization within an explanation, you can specify it with a key as the subsequent function argument.

show(ebm_global, "Age")




Show multiple explanations for comparison#

If you are running in a local environment (such as Python on your laptop), show can expose a dashboard for comparison, invoked by providing a list of explanations as the first argument:

from interpret.glassbox import LogisticRegression

# We have to transform categorical variables to use Logistic Regression
X_train = pd.get_dummies(X_train, prefix_sep='.').astype(float)

lr = LogisticRegression(random_state=seed, penalty='l1', solver='liblinear')
lr.fit(X_train, y_train)

lr_global = lr.explain_global()
show([ebm_global, lr_global])

Interpret API#

The API is responsible for standardizing ML interpretability explainers and explanations, providing a consistent interface for both users and developers. To support this, it also provides foundational top-level methods that support visualization and data access.

Explainers are glassbox or blackbox algorithms that will produce an explanation, an artifact that is ready for visualizations or further data processing.

Explainer#

An explainer will produce an explanation from its .explain_* method. These explanations normally provide an understanding of global model behavior or local individual predictions (.explain_global and .explain_local respectively).

class interpret.api.base.ExplainerMixin#
An object that computes explanations.

This is a contract required for InterpretML.

Variables:
  • available_explanations – A list of strings subsetting the following - “perf”, “data”, “local”, “global”.

  • explainer_type – A string that is one of the following - “blackbox”, “model”, “specific”, “data”, “perf”.
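To make the contract concrete, here is a minimal sketch of a class that satisfies the ExplainerMixin attributes and produces a global explanation. The explainer itself (a toy mean-difference feature scorer) and all names in it are illustrative assumptions, not part of the interpret package:

```python
class MeanDiffExplainer:
    """Toy explainer: scores each feature by the absolute difference of
    its per-class means. Illustrative only -- not a real interpret class."""

    # Contract attributes required by ExplainerMixin
    available_explanations = ["global"]
    explainer_type = "blackbox"

    def fit(self, X, y):
        # X: list of feature rows, y: list of 0/1 labels
        n_pos = sum(y)
        n_neg = len(y) - n_pos
        self.scores_ = []
        for col in zip(*X):  # iterate over feature columns
            pos_mean = sum(v for v, label in zip(col, y) if label == 1) / max(1, n_pos)
            neg_mean = sum(v for v, label in zip(col, y) if label == 0) / max(1, n_neg)
            self.scores_.append(abs(pos_mean - neg_mean))
        return self

    def explain_global(self, name=None):
        # A plain dict standing in for an Explanation object
        return {
            "explanation_type": "global",
            "name": name or "Mean Difference",
            "scores": self.scores_,
        }
```

A real explainer would return an object implementing ExplanationMixin rather than a dict, but the two contract attributes above are what InterpretML checks for.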

Explanation#

An explanation is a self-contained object that helps you understand either its target model’s behavior or a set of individual predictions. The explanation should provide access to visualizations through its .visualize method and to data processing through its .data method. Both .visualize and .data should share the same function signature in terms of arguments.

class interpret.api.base.ExplanationMixin#
The result of calling explain_* from an Explainer. Responsible for providing data and/or visualization.

This is a contract required for InterpretML.

Variables:
  • explanation_type – A string that is one of the explainer’s available explanations. Should be one of “perf”, “data”, “local”, “global”.

  • name – A string that denotes the name of the explanation for display purposes.

  • selector – An optional dataframe that describes the data. Each row of the dataframe corresponds with a respective data item.

abstract data(key=None)#

Provides specific explanation data.

Parameters:

key – A number/string that references a specific data item.

Returns:

A serializable dictionary.

abstract visualize(key=None)#

Provides interactive visualizations.

Parameters:

key – Either a scalar or list that indexes the internal object for sub-plotting. If an overall visualization is requested, pass None.

Returns:

A Plotly figure, html as string, or a Dash component.
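The following sketch shows one way a class could satisfy the ExplanationMixin contract, with data() returning a serializable dict and visualize() returning HTML as a string (one of the allowed return types). All names and the table rendering are illustrative assumptions, not interpret internals:

```python
class SimpleGlobalExplanation:
    """Toy global explanation implementing the ExplanationMixin contract."""

    # Contract attributes
    explanation_type = "global"
    name = "Simple Global"
    selector = None  # optional dataframe describing the data items

    def __init__(self, feature_names, scores):
        self._names = feature_names
        self._scores = scores

    def data(self, key=None):
        # key=None means the overall explanation; otherwise a single item
        if key is None:
            return {"names": self._names, "scores": self._scores}
        return {"names": [self._names[key]], "scores": [self._scores[key]]}

    def visualize(self, key=None):
        # Same signature as data(); renders the selected data as an HTML table
        d = self.data(key)
        rows = "".join(
            f"<tr><td>{n}</td><td>{s:.3f}</td></tr>"
            for n, s in zip(d["names"], d["scores"])
        )
        return f"<table>{rows}</table>"
```

Note how data() and visualize() share the same key argument, as the contract requires, so callers can request the same slice in either form.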

Show#

The show method is a universal function that provides visualizations for whatever explanation(s) are provided as its arguments. Implementation-wise, it spins up a visualization platform (i.e. a dashboard or widget) and exposes the explanations’ visualizations as produced by their .visualize calls.

class interpret.show(explanation, key=-1, **kwargs)#

Provides an interactive visualization for the given explanation(s).

By default, the provided visualization is not preserved when the notebook exits.

Parameters:
  • explanation – Either a scalar Explanation or list of Explanations to render as visualization.

  • key – Specific index of explanation to visualize.

  • **kwargs – Kwargs passed down to provider’s render() call.

Returns:

None.
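The dispatch described above (single explanation to a widget, list of explanations to a comparison dashboard) can be sketched as follows. The provider protocol and all names here are assumptions for illustration, not interpret’s actual internals:

```python
def show_sketch(explanation, key=-1, provider=None, **kwargs):
    """Illustrative stand-in for interpret.show's dispatch logic."""
    # Normalize the first argument: a scalar explanation becomes a
    # one-element list so both cases share one code path.
    explanations = explanation if isinstance(explanation, list) else [explanation]

    # A single explanation renders as a widget; multiple explanations
    # render as a comparison dashboard.
    mode = "dashboard" if len(explanations) > 1 else "widget"

    # Kwargs would be passed down to the provider's render() call.
    if provider is not None:
        provider.render(explanations, key=key, **kwargs)
    return mode  # the real show() returns None; returned here for clarity
```

This mirrors the two usage patterns shown earlier: show(ebm_global) for a single widget and show([ebm_global, lr_global]) for a dashboard.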